207 research outputs found

    Re-presenting China in Digital Immersive Art: Virtual Reality, Imaginaries, and Cultural Presence

    The thesis explores how digital technology, in particular virtual reality and augmented reality, is playing a role in China’s rejuvenation, especially in relation to cultural displays, performances, and art exhibitions. This project examines how audiences, both in China and globally, respond to ‘Digital China’, a concept describing how people’s everyday lives in China are becoming superconnected by digital technology. A qualitative methodology with a multi-perspectival approach is applied to advance the aim of the project.

    Towards Privacy-Preserving Person Re-identification via Person Identify Shift

    Recently, privacy concerns around person re-identification (ReID) have attracted increasing attention, and preserving the privacy of the pedestrian images used by ReID methods has become essential. De-identification (DeID) methods alleviate privacy issues by removing identity-related information from the ReID data. However, most existing DeID methods tend to remove all personal identity-related information, compromising the usability of the de-identified data for the ReID task. In this paper, we aim to develop a technique that achieves a good trade-off between privacy protection and data usability for person ReID. To this end, we propose a novel de-identification method designed explicitly for person ReID, named Person Identify Shift (PIS). PIS removes the absolute identity in a pedestrian image while preserving the identity relationship between image pairs. By exploiting the interpolation property of the variational auto-encoder, PIS shifts each pedestrian image from its current identity to a new one, so that the resulting images still preserve their relative identities. Experimental results show that our method achieves a better trade-off between privacy preservation and model performance than existing de-identification methods and can defend against both human and model attacks on data privacy.
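The interpolation idea behind PIS can be illustrated with a toy sketch. The paper uses a trained variational auto-encoder on pedestrian images; the linear encoder/decoder and all names below are hypothetical stand-ins, intended only to show why latent interpolation changes absolute identity while preserving the relationship between an image pair.

```python
import numpy as np

# Toy stand-in for a VAE: an invertible linear "encoder"/"decoder"
# (hypothetical simplification; PIS uses a trained VAE on real images).
rng = np.random.default_rng(0)
W = rng.standard_normal((8, 8))
W_inv = np.linalg.inv(W)

def encode(x):
    return W @ x

def decode(z):
    return W_inv @ z

def identity_shift(x, x_new_id, alpha=0.5):
    """Shift x toward a new identity by interpolating in latent space."""
    z = (1 - alpha) * encode(x) + alpha * encode(x_new_id)
    return decode(z)

# Two "images" of the SAME person, plus a reference with a new identity.
person_a1 = rng.standard_normal(8)
person_a2 = person_a1 + 0.01 * rng.standard_normal(8)  # near-duplicate
new_identity = rng.standard_normal(8)

shifted_1 = identity_shift(person_a1, new_identity)
shifted_2 = identity_shift(person_a2, new_identity)

# Absolute identity changes (shifted images are far from the originals),
# but the pairwise relationship is preserved: the two shifted images of
# the same person remain close to each other.
pair_gap = np.linalg.norm(shifted_1 - shifted_2)
id_change = np.linalg.norm(shifted_1 - person_a1)
print(pair_gap < id_change)  # → True
```

In the toy linear case the pairwise gap shrinks by exactly the factor (1 − alpha), which is the "relative identity preserved" property the abstract describes.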

    Association Between the Ratio of Ovarian Stimulation Duration to Original Follicular Phase Length and In Vitro Fertilization Outcomes: A Novel Index to Optimise Clinical Trigger Time

    The duration of ovarian stimulation, which is largely dependent on the ovarian response to hormonal stimulation, may influence in vitro fertilization (IVF) outcomes. Menstrual cycle length (MCL) is potentially a good indicator of ovarian reserve and can predict ovarian response. Ovarian stimulation and the follicular phase of the menstrual cycle are both processes of follicular development. There is no published research predicting the duration of ovarian stimulation from the length of the menstrual cycle. Our retrospective cohort study included 6110 women with regular menstrual cycles who underwent their first IVF treatment between January 2015 and October 2020. Cycles were classified according to quartiles of the ratio of ovarian stimulation duration to original follicular phase length (OS/FP). Multivariate generalized linear models were applied to assess the association between OS/FP and IVF outcomes. The odds ratio (OR) or relative risk (RR) was estimated for each quartile, with the lowest quartile as the comparison group. Cycles with an OS/FP of 0.67 to 0.77 had more retrieved and mature oocytes (adjusted RR 1.11, 95% confidence interval [CI] 1.07–1.15, p for trend = 0.001; adjusted RR 1.14, 95% CI 1.09–1.19, p for trend = 0.001). An OS/FP of 0.67 to 0.77 also showed the highest fertilization rate (adjusted OR 1.11, 95% CI 1.05–1.17, p for trend = 0.001), whereas OS/FP > 0.77 had the lowest rate of high-quality blastocyst formation (adjusted OR 0.81, 95% CI 0.71–0.93, p for trend = 0.01). No apparent association was noted between OS/FP and the clinical pregnancy, live birth, or early miscarriage rate. In conclusion, OS/FP has a significant effect on the number of oocytes, the fertilization rate, and the high-quality blastocyst formation rate. MCL could be used to predict the duration of ovarian stimulation targeting an OS/FP of 0.67 to 0.77, which provides a new indicator for the individualized clinical optimization of trigger time.
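A hypothetical worked example of the OS/FP index may help. The abstract does not state how follicular phase length is derived from cycle length; the sketch below assumes the common approximation of cycle length minus a fixed 14-day luteal phase, and all numbers are illustrative, not from the study.

```python
# OS/FP = ovarian stimulation duration / original follicular phase length.
# Assumption (not from the paper): follicular phase ≈ MCL − 14-day luteal phase.
LUTEAL_PHASE_DAYS = 14

def os_fp_ratio(stimulation_days, cycle_length_days):
    follicular_phase = cycle_length_days - LUTEAL_PHASE_DAYS
    return stimulation_days / follicular_phase

def in_reported_optimal_band(ratio, low=0.67, high=0.77):
    """The abstract reports the best oocyte yield for OS/FP of 0.67 to 0.77."""
    return low <= ratio <= high

# Illustrative patient: 28-day cycle, 10 days of stimulation.
ratio = os_fp_ratio(stimulation_days=10, cycle_length_days=28)
print(round(ratio, 3), in_reported_optimal_band(ratio))  # → 0.714 True
```

Read the other way, the same arithmetic suggests how MCL could inform trigger timing: for a 28-day cycle, an OS/FP target of 0.67–0.77 corresponds to roughly 9.4–10.8 stimulation days under the assumed 14-day luteal phase.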

    Invisible Backdoor Attack with Dynamic Triggers against Person Re-identification

    In recent years, person re-identification (ReID) has progressed rapidly, with wide real-world applications, but it also poses significant risks of adversarial attacks. In this paper, we focus on backdoor attacks on deep ReID models. Existing backdoor attack methods follow an all-to-one/all attack scenario, where all the target classes in the test set have already been seen in the training set. However, ReID is a much more complex fine-grained open-set recognition problem, where the identities in the test set are not contained in the training set. Thus, previous backdoor attack methods for classification are not applicable to ReID. To address this issue, we propose a novel backdoor attack on deep ReID under a new all-to-unknown scenario, called Dynamic Triggers Invisible Backdoor Attack (DT-IBA). Instead of learning fixed triggers for the target classes from the training set, DT-IBA can dynamically generate new triggers for any unknown identities. Specifically, an identity hashing network is proposed to first extract target identity information from a reference image, which is then injected into benign images by image steganography. We extensively validate the effectiveness and stealthiness of the proposed attack on benchmark datasets, and evaluate the effectiveness of several defense methods against our attack.
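The trigger-injection data flow can be sketched with a deliberately simplified scheme. DT-IBA uses a learned identity hashing network and learned image steganography; the least-significant-bit (LSB) embedding below is a hypothetical stand-in chosen only to show how an identity code can ride invisibly inside a benign image.

```python
import numpy as np

# Simplified stand-in for the paper's learned steganography: hide a
# target-identity bit code in the LSBs of the first pixels of an image.

def embed_identity_code(image, code_bits):
    """Write code_bits into the LSBs of the first len(code_bits) pixels."""
    flat = image.flatten().astype(np.uint8)  # flatten() returns a copy
    flat[: len(code_bits)] = (flat[: len(code_bits)] & 0xFE) | code_bits
    return flat.reshape(image.shape)

def extract_identity_code(image, n_bits):
    return image.flatten()[:n_bits] & 1

rng = np.random.default_rng(1)
benign = rng.integers(0, 256, size=(4, 4), dtype=np.uint8)
identity_code = np.array([1, 0, 1, 1, 0, 0, 1, 0], dtype=np.uint8)

poisoned = embed_identity_code(benign, identity_code)

# The trigger is visually invisible: each pixel changes by at most 1...
assert np.max(np.abs(poisoned.astype(int) - benign.astype(int))) <= 1
# ...yet the target-identity code is fully recoverable downstream.
assert np.array_equal(extract_identity_code(poisoned, 8), identity_code)
```

In the actual attack the code would come from the identity hashing network applied to a reference image of an unknown identity, which is what makes the trigger dynamic rather than fixed per class.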

    Learning Domain Invariant Prompt for Vision-Language Models

    Prompt learning is one of the most effective and trending ways to adapt powerful vision-language foundation models like CLIP to downstream datasets by tuning learnable prompt vectors with very few samples. However, although prompt learning achieves excellent performance over in-domain data, it still faces the major challenge of generalizing to unseen classes and domains. Some existing prompt learning methods tackle this issue by adaptively generating different prompts for different tokens or domains, but neglect the ability of learned prompts to generalize to unseen domains. In this paper, we propose a novel prompt learning paradigm, called MetaPrompt, that directly generates a \emph{domain invariant} prompt that can be generalized to unseen domains. Specifically, a dual-modality prompt tuning network is proposed to generate prompts for input from both the image and text modalities. With a novel asymmetric contrastive loss, the representation from the original pre-trained vision-language model acts as supervision to enhance the generalization ability of the learned prompt. More importantly, we propose a meta-learning-based prompt tuning algorithm that explicitly constrains the task-specific prompt tuned for one domain or class to also achieve good performance in another domain or class. Extensive experiments on 11 datasets for base-to-new generalization and 4 datasets for domain generalization demonstrate that our method consistently and significantly outperforms existing methods. Comment: 12 pages, 6 figures, 5 tables
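The meta-learning constraint described in the abstract can be sketched abstractly: a prompt adapted on one domain must also lower the loss on another. The toy quadratic losses and first-order update below are hypothetical simplifications, not CLIP objectives or the paper's actual algorithm.

```python
import numpy as np

# Toy sketch: the prompt adapted on domain A (inner step) is evaluated on
# domain B (outer step), so updates favor cross-domain generalization.
# Losses are toy quadratics; a first-order approximation replaces
# differentiating through the inner step.

def loss(prompt, domain_center):
    return float(np.sum((prompt - domain_center) ** 2))

def grad(prompt, domain_center):
    return 2 * (prompt - domain_center)

prompt = np.zeros(4)
domain_a = np.array([1.0, 0.0, 1.0, 0.0])
domain_b = np.array([0.8, 0.2, 0.9, 0.1])  # related but "unseen" domain
lr = 0.1

for _ in range(50):
    # Inner step: adapt the prompt on domain A.
    adapted = prompt - lr * grad(prompt, domain_a)
    # Outer (meta) step: the adapted prompt should also do well on
    # domain B, so update the base prompt with B's gradient at `adapted`.
    prompt = prompt - lr * grad(adapted, domain_b)

# The learned prompt lands between the two domains instead of
# overfitting domain A alone: it beats the zero init on both.
print(loss(prompt, domain_a) < loss(np.zeros(4), domain_a))  # → True
print(loss(prompt, domain_b) < loss(np.zeros(4), domain_b))  # → True
```

The point of the sketch is the structure of the loop, inner adaptation on one domain supervised by the outer loss on another, which is the explicit cross-domain constraint the abstract attributes to MetaPrompt.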